
    Visually induced linear vection is enhanced by small physical accelerations

    Wong & Frost (1981) showed that the onset latency of visually induced self-rotation illusions (circular vection) can be reduced by concomitant small physical motions (jerks). Here, we tested (a) whether such facilitation also applies to translations, and (b) whether the strength of the jerk (the degree of visuo-vestibular cue conflict) matters. Fourteen naïve observers rated the onset, intensity, and convincingness of forward linear vection induced by photorealistic visual stimuli of a street of houses presented on a projection screen (FOV: 75°×58°). For 2/3 of the trials, brief physical forward accelerations (jerks applied using a Stewart motion platform) accompanied the visual motion onset. Adding jerks enhanced vection significantly: onset latency was reduced by 50%, and convincingness and intensity ratings increased by more than 60%. The effect size was independent of visual acceleration (1.2 and 12 m/s^2) and jerk size (about 0.8 and 1.6 m/s^2 at participants’ head for 1 and 3 cm displacement, respectively), and showed no interactions. Thus, quantitative matching between the visual and physical acceleration profiles might not be as critical as often believed, as long as they match qualitatively and are temporally synchronized. These findings could be employed to improve the convincingness and effectiveness of low-cost simulators without the need for expensive, large motion platforms.
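    As a rough illustration of how a small platform displacement relates to the peak acceleration felt at the head, the sketch below assumes a symmetric bang-bang profile (constant acceleration for the first half of the move, constant deceleration for the second half). The movement durations are hypothetical placeholders chosen only so the peaks land near the values quoted above; they are not taken from the study.

        # Peak acceleration of a symmetric bang-bang (accelerate, then decelerate) move.
        # Each half covers d/2 in T/2 at constant acceleration a, so
        # d/2 = (a/2) * (T/2)**2  =>  a = 4 * d / T**2.

        def peak_acceleration(displacement_m: float, duration_s: float) -> float:
            """Peak acceleration (m/s^2) for a bang-bang move of the given displacement and duration."""
            return 4.0 * displacement_m / duration_s ** 2

        # Hypothetical durations of roughly a quarter of a second:
        for d, T in [(0.01, 0.224), (0.03, 0.274)]:
            print(f"{d * 100:.0f} cm in {T:.3f} s -> peak ~{peak_acceleration(d, T):.2f} m/s^2")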

    Influence of Auditory Cues on the visually-induced Self-Motion Illusion (Circular Vection) in Virtual Reality

    This study investigated whether the visually induced self-motion illusion (“circular vection”) can be enhanced by adding a matching auditory cue (the sound of a fountain that is also visible in the visual stimulus). Twenty observers viewed rotating photorealistic pictures of a market place projected onto a curved projection screen (FOV: 54°×45°). Three conditions were randomized in a repeated-measures within-subject design: no sound, mono sound, and spatialized sound using a generic head-related transfer function (HRTF). Adding mono sound increased convincingness ratings marginally, but did not affect any of the other measures of vection or presence. Spatializing the fountain sound, however, improved vection (convincingness and vection buildup time) and presence ratings significantly. Note that facilitation was found even though the visual stimulus was of high quality and realism, and known to be a powerful vection-inducing stimulus. Thus, HRTF-based auralization using headphones can be employed to improve visual VR simulations both in terms of self-motion perception and overall presence.
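    For readers unfamiliar with HRTF-based auralization, the sketch below shows the core operation under simplifying assumptions: a mono recording is convolved with a left and a right head-related impulse response (HRIR) for the source's direction, producing a stereo signal for headphone playback. The HRIR arrays and the block-wise handling of a moving source are placeholders, not details taken from the study.

        import numpy as np
        from scipy.signal import fftconvolve

        def spatialize(mono: np.ndarray, hrir_left: np.ndarray, hrir_right: np.ndarray) -> np.ndarray:
            """Render a mono signal at the direction encoded by the given HRIR pair.

            mono             : 1-D array of samples (e.g. the fountain recording)
            hrir_left/right  : head-related impulse responses for the desired azimuth/elevation
            returns          : (n_samples, 2) stereo array for headphone playback
            """
            left = fftconvolve(mono, hrir_left, mode="full")
            right = fftconvolve(mono, hrir_right, mode="full")
            return np.stack([left, right], axis=-1)

        # For a source that rotates with the scene, the signal would be processed in short
        # blocks, selecting (or interpolating) the HRIR pair for each block's azimuth and
        # cross-fading between blocks to avoid audible clicks.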

    Back-action Evading Measurements of Nanomechanical Motion

    When performing continuous measurements of position with sensitivity approaching quantum mechanical limits, one must confront the fundamental effects of detector back-action. Back-action forces are responsible for the ultimate limit on continuous position detection, can also be harnessed to cool the observed structure, and are expected to generate quantum entanglement. Back-action can also be evaded, allowing measurements with sensitivities that exceed the standard quantum limit and potentially allowing for the generation of quantum squeezed states. We realize a device based on the parametric coupling between an ultra-low-dissipation nanomechanical resonator and a microwave resonator. Here we demonstrate back-action evading (BAE) detection of a single quadrature of motion with a sensitivity of 4 times the quantum zero-point motion, back-action cooling of the mechanical resonator to n = 12 quanta, and a parametric mechanical pre-amplification effect which is harnessed to achieve a position resolution of 1.3 times the quantum zero-point motion.
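    As background, the standard two-tone back-action-evading scheme can be summarized as follows; this is a textbook-style sketch using common conventions, with an assumed coupling rate G, and is not necessarily the notation of the paper itself. The mechanical motion is decomposed into slowly varying quadratures, and driving the microwave cavity at the two sidebands couples the cavity to only one of them:

        % Quadrature decomposition of the mechanical motion at frequency \omega_m,
        % and the zero-point amplitude that sets the measurement scale:
        x(t) = X_1 \cos(\omega_m t) + X_2 \sin(\omega_m t), \qquad
        x_{\mathrm{zp}} = \sqrt{\frac{\hbar}{2 m \omega_m}}

        % With drive tones at \omega_c \pm \omega_m, the effective interaction (in the
        % rotating-wave approximation) probes only the X_1 quadrature:
        H_{\mathrm{int}} \simeq \hbar G \, \hat{X}_1 \, (\hat{a} + \hat{a}^{\dagger}), \qquad
        [\hat{X}_1(t), \hat{X}_1(t')] = 0

        % Because X_1 commutes with itself at all times, the measurement back-action is
        % diverted entirely into the unmeasured quadrature X_2, so X_1 can in principle
        % be resolved below the standard quantum limit.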

    Distortion in 3D shape estimation with changes in illumination

    In many domains it is very important that observers form an accurate percept of 3-dimensional structure from 2-dimensional images of scenes or objects. This is particularly relevant for designers who need to make decisions concerning the refinement of novel objects that have not yet been physically built. This study presents the results of two experiments whose goal was to test the effect of lighting direction on the shape perception of smooth surfaces, using shading and lighting techniques commonly used in modeling and design software. The first experiment was a two-alternative forced-choice task that crossed the amount of shape difference between smooth surfaces lit by a single point light with whether the light source was in the same or a different position for each surface. Results show that, as the difference between the shapes decreased, participants became increasingly biased towards choosing the match shape lit by the same source as the test shape. In the second experiment, participants had to report the orientation at equivalent probe locations on pairs of smooth surfaces presented simultaneously, using gauge figures. The surfaces could be either the same or slightly different, and the light source of each shape could be either at the same relative location or offset by 90° horizontally. Participants reported large differences in surface orientation when the lighting condition was different, even when the shapes were the same, confirming the first results. Our findings show that lighting conditions can have a strong effect on 3-dimensional perception, and suggest that great care should be taken when projection systems are used for 3D visualisation where an accurate representation is required, either by carefully choosing lighting conditions or by using more realistic rendering techniques.
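    The lighting manipulation described above amounts to rendering the same smooth surface under a simple diffuse (Lambertian) model with the light placed at different positions. The sketch below illustrates this under that assumption; the example heightfield and light directions are illustrative, not the study's actual stimuli.

        import numpy as np

        def lambertian_shading(height: np.ndarray, light_dir) -> np.ndarray:
            """Shade a heightfield z(x, y) with a distant point light using Lambertian reflectance."""
            gy, gx = np.gradient(height)                               # surface slopes
            normals = np.dstack([-gx, -gy, np.ones_like(height)])
            normals /= np.linalg.norm(normals, axis=2, keepdims=True)
            light = np.asarray(light_dir, dtype=float)
            light /= np.linalg.norm(light)
            return np.clip(normals @ light, 0.0, None)                 # N . L, clamped at zero

        # An illustrative smooth, irregular surface and two light directions 90 deg apart in azimuth.
        x, y = np.meshgrid(np.linspace(-2, 2, 256), np.linspace(-2, 2, 256))
        surface = np.exp(-(x - 0.5) ** 2 - y ** 2) + 0.6 * np.exp(-(x + 0.8) ** 2 - (y + 0.5) ** 2)
        image_a = lambertian_shading(surface, light_dir=[1, 0, 1])     # lit from the right
        image_b = lambertian_shading(surface, light_dir=[0, 1, 1])     # lit from 90 deg away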

    Lighting Direction Affects Perceived Shape from Shading

    It has been known for a long time that many cues contribute to the perception of 3D shape from 2D images, such as shape from shading, textures, occlusions, or reflections of the surrounding environment. However, little is known about the influence of lighting conditions on the correct mental reconstruction of 3D shapes. In order to investigate this, we ran a set of experiments asking participants to report differences in the surface orientation of unknown, smooth surfaces, using different methods. The first experiment was a 2AFC task in which subjects had to identify which of two test objects had the same shape as the target. The stimuli were computer-generated, irregularly shaped smooth surfaces, illuminated by a single point light source. For both test stimuli, the position of the light source could be either the same as or different from that of the target. Results show that, as the amount of shape difference became smaller, participants became increasingly biased towards choosing the match shape lit by the same source as the target. In the second experiment, participants had to report the perceived orientation of the surfaces at various locations by adjusting gauge figures. The surfaces could be either the same or slightly different, and the light source of each shape could be either the same or offset by 90 degrees horizontally. Participants’ matches revealed large differences in perceived surface orientations when the lighting was different, even when the shapes were the same, confirming the first results. Our findings show that lighting conditions can play a substantial role in the perception of the 3D structure of objects from their 2D representation. We also discuss the implications of this for the domain of computer-aided visualisation.

    Auditory cues can facilitate the visually-induced self-motion illusion (circular vection) in Virtual Reality

    There is a long tradition of investigating the self-motion illusion induced by rotating visual stimuli ("circular vection"). Recently, Larsson et al. (2004) [1] showed that up to 50% of participants could also get some vection from rotating sound sources while blindfolded, replicating findings from Lackner (1977) [2]. Compared to the compelling visual illusion, though, auditory vection is rather weak and much less convincing. Here, we tested whether adding an acoustic landmark to a rotating photorealistic visual stimulus of a natural scene can improve vection. Twenty observers viewed rotating stimuli that were projected onto a curved projection screen (FOV: 54°×40.5°). The visual scene rotated around the earth-vertical axis at 30°/s. Three conditions were randomized in a repeated-measures within-subject design: no-sound, mono-sound, and 3D-sound using a generic head-related transfer function (HRTF). Adding mono-sound showed only minimal tendencies towards increased vection and did not affect presence ratings at all, as assessed using the Schubert et al. (2001) presence questionnaire [3]. Vection was, however, slightly but significantly improved by adding a rotating 3D-sound source that moved in accordance with the visual scene: convincingness ratings increased from 60.2 (mono-sound) to 69.6 (3D-sound) (t(19)=-2.84, p=.01), and vection buildup times decreased from 12.5 s (mono-sound) to 11.1 s (3D-sound) (t(19)=2.69, p=.015). Furthermore, overall presence ratings were increased slightly but significantly. Note that vection onset times were not significantly affected (9.6 s vs. 9.9 s, p>.05). We conclude that adding spatialized 3D-sound that moves concordantly with a visual self-motion simulation not only increases overall presence, but also improves the self-motion sensation itself. The effect size for the vection measures was, however, rather small (about 15%), which might be explained by a ceiling effect, as visually induced vection was already quite strong without the 3D-sound (9.9 s vection onset time). Merely adding non-spatialized (mono) sound did not show any clear effects. These results have important implications for the understanding of multi-modal cue integration in general and for self-motion simulations in Virtual Reality in particular.

    Localisation errors during active control of a target object

    When a drifting grating is viewed through a stationary aperture, the global position of the aperture is displaced in the direction of local motion (Ramachandran & Anstis, 1990, Perception, 19, 611-616). The purpose of the current study was to assess whether such displacement continues to occur when observers actively control the global position of the aperture. We created a simple game in which observers were given a bird's-eye view of a curving pathway along which they had to guide a target object. The target object was a Gabor patch with a spatial frequency of 1 cycle per degree and an extent of approximately 2.5 degrees of visual angle. The pathway was scrolled downwards to create the impression that the object was moving upwards along the path at a constant velocity. The vertical position of the target object was fixed at the centre of the screen, and a joystick was used to adjust the horizontal position so that the aperture was always centred on the pathway. In separate blocks we varied the speed of local motion in the aperture from 0 to 3 cycles per second, in steps of 0.5 cycles per second. When the grating was stationary, observers were able to guide the target object along the path with virtually no errors. As the speed of local motion increased, errors also increased, reaching an asymptote of 27 min arc at 1.5 cycles per second. These results suggest that active control of an object cannot overcome the perceptual displacements induced by the drifting grating.
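    To make the stimulus concrete, the sketch below generates one frame of such a target: a 1 cycle-per-degree sinusoidal carrier drifting behind a stationary Gaussian window. The pixels-per-degree scale, envelope width, and frame rate are placeholder values, not parameters reported above.

        import numpy as np

        def gabor_frame(t, size_deg=2.5, sf_cpd=1.0, drift_hz=1.5, px_per_deg=40, sigma_deg=0.5):
            """One frame of a drifting Gabor: the carrier moves, the Gaussian window does not.

            t : time in seconds; the carrier phase advances by drift_hz cycles per second.
            """
            n = int(size_deg * px_per_deg)
            deg = (np.arange(n) - n / 2) / px_per_deg
            x, y = np.meshgrid(deg, deg)
            envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma_deg ** 2))   # stationary window
            carrier = np.cos(2 * np.pi * (sf_cpd * x - drift_hz * t))      # drifting grating
            return envelope * carrier   # values in [-1, 1]; map to screen luminance as needed

        # One second of frames at a (hypothetical) 60 Hz refresh rate:
        frames = [gabor_frame(t) for t in np.arange(0.0, 1.0, 1 / 60)]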

    Motion-induced localization bias in an action task

    De Valois and De Valois (1991, Vision Research, 31, 1619-1626) have shown that a moving carrier behind a stationary window can cause a perceptual misplacement of the envelope in the direction of motion. The authors also found that the bias increased with increasing carrier speed and eccentricity. Yamagishi et al. (2001, Proceedings of the Royal Society, 268, 973-977) showed that this effect can also be found in visuo-motor tasks. To test whether variables such as eccentricity and grating speed also increase the motion-induced perceptual shift of a motion field in an action task, we created a motor-control experiment in which these variables were manipulated (eccentricity values: 0 deg, 8.4 deg and 16.8 deg; speed values: 1.78 deg/sec, 4.45 deg/sec and 7.1 deg/sec). Participants had to keep a downward-sliding path aligned with a motion field (stationary Gaussian envelope and horizontally moving carrier) by manipulating the path with a joystick. The perceptual bias was measured as the average difference between the correct and actual path position. Both speed and eccentricity had a significant impact on the bias size. As in the recognition task, the bias size increased with increasing carrier speed. Contrary to De Valois and De Valois’ finding, here the perceptual shift decreased with increasing eccentricity. There was no interaction between the variables. If we assume an ecological reason for the existence of a motion-induced bias, it is plausible that the bias is smaller in an unnatural task such as actively manipulating an object at an eccentric position in the visual field (hence the decrease of bias magnitude in the periphery). Recognition tasks carried out in the periphery of the visual field, by contrast, are far more common and therefore might “benefit” from the existence of a motion-induced localization bias. As expected, task difficulty increased with increasing speed and eccentricity. It seems worthwhile to further compare action and perception tasks in terms of the factors influencing the localization bias in these different task types.
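    A minimal sketch of the bias measure described above, under the assumption that the joystick-controlled and correct path positions are sampled at the same time points; the sign convention (positive = shift in the carrier's drift direction) and the trial lists are hypothetical, not taken from the study.

        import numpy as np

        def mean_signed_deviation(controlled_x: np.ndarray, path_x: np.ndarray, direction: int) -> float:
            """Mean deviation of the controlled position from the correct path position,
            signed so that positive values indicate a shift in the carrier's drift
            direction (direction = +1 for rightward drift, -1 for leftward drift)."""
            return float(np.mean((controlled_x - path_x) * direction))

        # Hypothetical usage: one value per trial, then averaged within each condition, e.g.
        #   bias = np.mean([mean_signed_deviation(x, p, d) for (x, p, d) in trials_in_condition])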

    Spatialized auditory cues enhance the visually-induced self-motion illusion (circular vection) in Virtual Reality

    “Circular vection” refers to the illusion of self-motion induced by rotating visual or auditory stimuli. Visually induced vection can be quite compelling, and the illusion has been investigated extensively for over a century. Rotating auditory cues can also induce vection, but only in about 25-60% of blindfolded participants (Lackner, 1977; Larsson et al., 2004). Furthermore, auditory vection is much weaker and far less compelling than visual vection, which can be indistinguishable from real motion. Here, we investigated whether an additional auditory cue (the sound of a fountain that is also visible in the visual stimulus) can be utilized to enhance visually induced self-motion perception. To the best of our knowledge, this is the first study directly addressing audio-visual contributions to vection. Twenty observers viewed rotating photorealistic pictures of a natural scene projected onto a curved projection screen (FOV: 54°×45°). Three conditions were randomized in a repeated-measures within-subject design: no sound, mono sound, and spatialized sound using a generic head-related transfer function (HRTF). Adding mono sound to the visual vection stimulus increased convincingness ratings marginally, but did not affect vection onset time, vection buildup time, vection intensity, or rated presence. Spatializing the fountain sound such that it moved in accordance with the fountain in the visual scene, however, improved vection significantly in terms of convincingness, vection buildup time, and presence ratings. The effect size for the vection measures was, however, rather small (<16%). This might be related to a ceiling effect, as visually induced vection was already quite strong without the spatialized sound (10 s vection onset time). Despite the small effect size, this study shows that HRTF-based auralization using headphones can be employed to improve visual VR simulations both in terms of self-motion perception and overall presence. Note that facilitation was found even though the visual stimulus was of high quality and realism, and known to be quite powerful in inducing vection. These findings have important implications both for the understanding of cross-modal cue integration and for optimizing VR simulations.

    Motion-Induced Shift and Navigation in Virtual Reality

    De Valois and De Valois [1] showed that moving Gabors (cosine gratings windowed by a stationary 2-dimensional Gaussian envelope) are locally misperceived as displaced in their direction of motion. In a pointing task, Yamagishi, Anderson and Ashida [2] reported an even stronger visuo-motor localization error, especially when participants had to make a speeded response. Here, we examined motion-induced bias in the context of an active navigation task, a situation in which perception and action are tightly coupled. Participants were presented with a bird's-eye view of a vertically moving contour that simulated observer motion along a path. Observers fixated centrally while the path and a moving Gabor target were presented peripherally. The task was to follow the path with the moving Gabor, whose position (left/right) and direction (towards left/right) were varied in separate blocks. Gabor eccentricity was constant relative to fixation, with observers adjusting their simulated position with a joystick. Deviations from the path were analyzed as a function of Gabor direction. We found large and consistent misalignment in the direction of the moving Gabor, indicating that global position/motion judgments during action can be strongly affected by irrelevant local motion signals.